
    ATLAS Great Lakes Tier-2 Computing and Muon Calibration Center Commissioning

    Large-scale computing in ATLAS is based on a grid-linked system of tiered computing centers. The ATLAS Great Lakes Tier-2 came online in September 2006 and is now being commissioned at full capacity to provide significant computing power and services to the USATLAS community. Our Tier-2 center also hosts the Michigan Muon Calibration Center, which is responsible for the daily calibration of the Monitored Drift Tubes of the ATLAS endcap muon system. During the first LHC beam period in 2008 and the subsequent ATLAS global cosmic-ray data-taking period, the Calibration Center received a large data stream from the muon detector and derived the drift-tube timing offsets and time-to-space functions with a turn-around time of 24 hours. We will present the Calibration Center commissioning status and our plans for the first LHC beam collisions in 2009.
    Comment: To be published in the proceedings of DPF-2009, Detroit, MI, July 2009, eConf C09072
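
    As a rough illustration of the kind of per-tube calibration the Center performs (not taken from the paper), the Python sketch below estimates a drift-tube timing offset (t0) as the rising edge of a drift-time spectrum. The binning, half-height threshold and synthetic toy data are illustrative assumptions, not the Calibration Center's actual algorithm.

```python
# Minimal sketch (not from the paper): estimate an MDT tube t0 as the rising
# edge of the drift-time spectrum. Binning, thresholds and the synthetic input
# below are illustrative assumptions, not the Calibration Center's algorithm.
import numpy as np

def estimate_t0(drift_times_ns, bin_width_ns=2.0):
    """Return an approximate t0: the time where the spectrum first reaches
    half of its plateau height."""
    edges = np.arange(drift_times_ns.min(),
                      drift_times_ns.max() + bin_width_ns, bin_width_ns)
    counts, edges = np.histogram(drift_times_ns, bins=edges)
    plateau = np.percentile(counts[counts > 0], 90)   # rough plateau height
    above = np.nonzero(counts >= 0.5 * plateau)[0]    # first half-height bin
    return edges[above[0]] if above.size else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_t0 = 120.0                                   # ns, arbitrary toy value
    # Toy spectrum: uniform drift times on top of a flat noise floor.
    signal = true_t0 + rng.uniform(0.0, 700.0, size=50_000)
    noise = rng.uniform(0.0, 900.0, size=2_000)
    times = np.concatenate([signal, noise])
    print(f"estimated t0 = {estimate_t0(times):.1f} ns (true {true_t0} ns)")
```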

    Networking Areas of Interest for HEP (White Paper)

    Networking Areas of Interest for HEP (Presentation)

    The Ultralight project: the network as an integrated and managed resource for data-intensive science

    Looks at the UltraLight project, which treats the network interconnecting globally distributed data sets as a dynamic, configurable, and closely monitored resource in order to construct a next-generation system that can meet the high-energy physics community's data-processing, distribution, access, and analysis needs.

    The Design and Demonstration of the Ultralight Testbed

    In this paper we present the motivation, the design, and a recent demonstration of the UltraLight testbed at SC|05. The goal of the UltraLight testbed is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. To achieve this goal we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. We first present early results in the various working areas of the project. We then describe our experience with the network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many Grid computing sites.
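
    The abstract's reference to kernel setup and application tuning points at the host-level work needed to sustain multi-Gbps TCP flows over long round-trip paths. The sketch below shows one generic piece of such tuning, requesting socket buffers sized near the bandwidth-delay product; the 10 Gbps rate and 100 ms RTT are illustrative assumptions, not the SC|05 configuration.

```python
# Generic sketch of application-level TCP tuning for a long-RTT bulk transfer.
# The rate and RTT are illustrative; this is not the SC|05 setup.
import socket

def tuned_socket(rate_bps=10e9, rtt_s=0.100):
    """Create a TCP socket with send/receive buffers sized near the
    bandwidth-delay product (the kernel may cap the granted size)."""
    bdp_bytes = int(rate_bps / 8 * rtt_s)   # ~125 MB for 10 Gbps x 100 ms
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
    return s

if __name__ == "__main__":
    s = tuned_socket()
    print("requested buffer:", int(10e9 / 8 * 0.100), "bytes")
    print("granted sndbuf:  ", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    s.close()
```

    Note that the kernel caps application-requested buffers at its own limits (net.core.rmem_max / wmem_max on Linux), so system-wide configuration of the hosts is also part of the tuning the paper describes.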

    The Motivation, Architecture and Demonstration of Ultralight Network Testbed

    In this paper we describe progress in the NSF-funded UltraLight project and a recent demonstration of UltraLight technologies at Supercomputing 2005 (SC|05). The goal of the UltraLight project is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. UltraLight adopts a new approach to networking: instead of treating it traditionally, as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. Thus we are constructing a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community. In this paper we present the motivation for, and an overview of, the UltraLight project. We then cover early results in the various working areas of the project. The remainder of the paper describes our experience with the UltraLight network architecture, kernel setup, application tuning and configuration used during the bandwidth challenge event at SC|05. During this challenge, we achieved a record-breaking aggregate data rate in excess of 150 Gbps while moving physics datasets between many sites interconnected by the UltraLight backbone network. The exercise highlighted the benefits of UltraLight's research and development efforts, which are enabling new and advanced methods of distributed scientific data analysis.

    Monitoring the US ATLAS Network Infrastructure with perfSONAR-PS

    Global scientific collaborations, such as ATLAS, continue to push the network requirements envelope. In the coming years, data movement in this collaboration will routinely include the exchange of petabytes of datasets between the collection and analysis facilities. These requirements place a high premium on networks functioning at peak efficiency and availability; the lack thereof could mean critical delays in the overall scientific progress of distributed data-intensive experiments like ATLAS. Network operations staff routinely must deal with problems deep in the infrastructure; this may be as benign as replacing a failing piece of equipment, or as complex as dealing with a multi-domain path that is experiencing data loss. In either case, it is crucial that effective monitoring and performance analysis tools are available to ease the burden of management. We report on our experiences deploying and using the perfSONAR-PS Performance Toolkit at ATLAS sites in the United States. This software creates a dedicated monitoring server capable of collecting and performing a wide range of passive and active network measurements. Each independent instance is managed locally, but is able to federate on a global scale, enabling a full view of the network infrastructure that spans domain boundaries. This information, available through web service interfaces, can easily be retrieved to create customized applications. The US ATLAS collaboration has developed a centralized “dashboard” offering network administrators, users, and decision makers the ability to see the performance of the network at a glance. The dashboard framework includes the ability to notify users (alarm) when problems are found, thus allowing rapid response to potential problems and making perfSONAR-PS crucial to the operation of our distributed computing infrastructure.
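
    To illustrate how a dashboard-style application might retrieve measurements from a monitoring node's web-service interface and raise an alarm, here is a minimal Python sketch. The host name, endpoint path, query parameters and JSON layout are assumptions made for illustration, not the actual perfSONAR-PS API; the real toolkit interfaces should be taken from its documentation.

```python
# Hypothetical sketch of how a dashboard might pull measurements from a
# monitoring node's web-service interface. The host, endpoint path and JSON
# layout are assumptions for illustration, not the actual perfSONAR-PS API.
import json
import urllib.request

MONITOR = "https://ps-monitor.example.org"   # hypothetical toolkit host

def fetch_measurements(host=MONITOR, path="/services/measurements?format=json"):
    """Fetch a JSON list of recent measurement records from one monitoring node."""
    with urllib.request.urlopen(host + path, timeout=10) as resp:
        return json.load(resp)

def alarm_on_loss(records, loss_threshold=0.01):
    """Flag source/destination pairs whose packet loss exceeds the threshold."""
    return [(r["source"], r["destination"], r["packet_loss"])
            for r in records if r.get("packet_loss", 0.0) > loss_threshold]

if __name__ == "__main__":
    for src, dst, loss in alarm_on_loss(fetch_measurements()):
        print(f"ALARM: {src} -> {dst} packet loss {loss:.1%}")
```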

    The Design and Implementation of the Transatlantic Mission-Oriented Production and Experimental Networks

    In this paper we present the design and implementation of the mission-oriented USLHCNet for the HEP research community and the UltraLight network testbed. The design philosophy for these networks is to help meet the data-intensive computing challenges of the next generation of particle physics experiments with a comprehensive, network-focused approach. Instead of treating the network as a static, unchanging and unmanaged set of inter-computer links, we are developing and using it as a dynamic, configurable, and closely monitored resource that is managed end-to-end. We present our work in the various areas of the project, including infrastructure construction, protocol research and application development. Our goal is to construct a next-generation global system that is able to meet the data processing, distribution, access and analysis needs of the particle physics community.